Image Quality Assessment for Embodied AI

Li, Chunyi, Xiao, Jiaohao, Zhang, Jianbo, Wen, Farong, Zhang, Zicheng, Tian, Yuan, Zhu, Xiangyang, Liu, Xiaohong, Cheng, Zhengxue, Lin, Weisi, Zhai, Guangtao

arXiv.org Artificial Intelligence

Embodied AI has developed rapidly in recent years, but it is still mainly deployed in laboratories, with various real-world distortions limiting its application. Traditionally, Image Quality Assessment (IQA) methods are applied to predict human preferences for distorted images; however, no IQA method exists to assess the usability of an image in embodied tasks, namely, the perceptual quality for robots. To provide accurate and reliable quality indicators for future embodied scenarios, we first propose the topic of IQA for Embodied AI. Specifically, we (1) construct a perception-cognition-decision-execution pipeline grounded in the Mertonian system and meta-cognitive theory, and define a comprehensive subjective score collection process; (2) establish the Embodied-IQA database, containing over 36k reference/distorted image pairs with more than 5M fine-grained annotations provided by Vision-Language Models, Vision-Language-Action models, and real-world robots; (3) train and validate mainstream IQA methods on Embodied-IQA, demonstrating the need to develop more accurate quality indicators for Embodied AI. We sincerely hope that, through evaluation, we can promote the application of Embodied AI under complex real-world distortions. Project page: https://github.com/lcysyzxdxc/EmbodiedIQA
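For orientation, a conventional full-reference IQA indicator of the kind this paper contrasts against can be sketched in a few lines. The example below computes PSNR between a reference and a distorted image; it is a generic illustration of classic human-preference-oriented IQA, not the Embodied-IQA method, and the nested-list image representation is an assumption for the sketch.

```python
import math

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a distorted image.

    `ref` and `dist` are same-shaped 2-D lists of pixel intensities.
    Higher PSNR means the distorted image is closer to the reference.
    """
    flat_ref = [p for row in ref for p in row]
    flat_dist = [p for row in dist for p in row]
    mse = sum((r - d) ** 2 for r, d in zip(flat_ref, flat_dist)) / len(flat_ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

A metric like this tracks signal fidelity for human viewing; the paper's point is that such scores need not predict whether a robot can still complete its task on the distorted image.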


StableMotion: Training Motion Cleanup Models with Unpaired Corrupted Data

Mu, Yuxuan, Ling, Hung Yu, Shi, Yi, Ojeda, Ismael Baira, Xi, Pengcheng, Shu, Chang, Zinno, Fabio, Peng, Xue Bin

arXiv.org Artificial Intelligence

Motion capture (mocap) data often exhibits visually jarring artifacts due to inaccurate sensors and post-processing. Cleaning this corrupted data can require substantial manual effort from human experts, which can be a costly and time-consuming process. Previous data-driven motion cleanup methods offer the promise of automating this cleanup process, but often require in-domain paired corrupted-to-clean training data. Constructing such paired datasets requires access to high-quality, relatively artifact-free motion clips, which often necessitates laborious manual cleanup. In this work, we present StableMotion, a simple yet effective method for training motion cleanup models directly from unpaired corrupted datasets that need cleanup. The core component of our method is the introduction of motion quality indicators, which can be easily annotated - through manual labeling or heuristic algorithms - and enable training of quality-aware motion generation models on raw motion data with mixed quality. At test time, the model can be prompted to generate high-quality motions using the quality indicators. Our method can be implemented through a simple diffusion-based framework, leading to a unified motion generate-discriminate model, which can be used to both identify and fix corrupted frames. We demonstrate that our proposed method is effective for training motion cleanup models on raw mocap data in production scenarios by applying StableMotion to SoccerMocap, a 245-hour soccer mocap dataset containing real-world motion artifacts. The trained model effectively corrects a wide range of motion artifacts, reducing motion pops and frozen frames by 68% and 81%, respectively. Results and code are available at https://yxmu.foo/stablemotion-page
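The abstract notes that quality indicators can be produced by heuristic algorithms. A minimal sketch of such a heuristic is shown below: it flags frozen frames (near-zero displacement) and pops (sudden jumps) from per-frame root positions. The thresholds and the position-only representation are illustrative assumptions, not StableMotion's actual annotation pipeline.

```python
def motion_quality_flags(positions, freeze_tol=1e-4, pop_tol=0.5):
    """Heuristic per-frame quality indicators: 1 = clean, 0 = corrupted.

    `positions` is a list of (x, y, z) root positions, one per frame.
    A frame is flagged if it barely moves relative to the previous frame
    (frozen) or jumps farther than `pop_tol` (pop artifact).
    """
    flags = [1] * len(positions)
    for i in range(1, len(positions)):
        step = sum((a - b) ** 2
                   for a, b in zip(positions[i], positions[i - 1])) ** 0.5
        if step < freeze_tol or step > pop_tol:
            flags[i] = 0
    return flags
```

Labels of this kind are cheap to produce over raw mocap, which is what lets a quality-aware generative model train on mixed-quality data without paired clean targets.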


Integrating Boosted learning with Differential Evolution (DE) Optimizer: A Prediction of Groundwater Quality Risk Assessment in Odisha

Subudhi, Sonalika, Pati, Alok Kumar, Bose, Sephali, Sahoo, Subhasmita, Pattanaik, Avipsa, Acharya, Biswa Mohan

arXiv.org Artificial Intelligence

Groundwater is increasingly degraded by human activities such as rapid industrialization, urbanization, over-extraction, and contamination from agricultural and urban sources. Among the various contaminants, heavy metals such as cadmium (Cd), chromium (Cr), arsenic (As), and lead (Pb) pose serious dangers when present in high concentrations in groundwater. Long-term exposure to these toxic elements may lead to neurological disorders, kidney failure, and various types of cancer. To address these issues, this study develops a machine learning-based predictive model to evaluate the Groundwater Quality Index (GWQI) and identify the main contaminants affecting water quality. This is achieved with a hybrid machine learning model, LCBoost Fusion. The model undergoes data preprocessing, hyperparameter tuning using Differential Evolution (DE) optimization, and evaluation through cross-validation. The LCBoost Fusion model outperforms the individual models (CatBoost and LightGBM), achieving low RMSE (0.6829), MSE (0.5102), and MAE (0.3147) and a high R$^2$ score of 0.9809. Feature importance analysis highlights potassium (K), fluoride (F), and total hardness (TH) as the most influential indicators of groundwater contamination. This research demonstrates the application of machine learning to assessing groundwater quality risks in Odisha. The proposed LCBoost Fusion model offers a reliable and efficient approach for real-time groundwater monitoring and risk mitigation. These findings will help environmental organizations and policy makers identify target areas for sustainable groundwater management. Future work will focus on incorporating remote sensing data and developing an interactive decision-making system for groundwater quality assessment.
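The four evaluation metrics quoted above (RMSE, MSE, MAE, R$^2$) are standard regression scores and can be computed directly; the sketch below is a generic pure-Python implementation, not the paper's evaluation code.

```python
def regression_metrics(y_true, y_pred):
    """Standard regression scores: MSE, RMSE, MAE, and R^2."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e ** 2 for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total variance
    r2 = 1.0 - sum(e ** 2 for e in errors) / ss_tot
    return {"MSE": mse, "RMSE": mse ** 0.5, "MAE": mae, "R2": r2}
```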


Permissioned Blockchain-based Framework for Ranking Synthetic Data Generators

Veeraragavan, Narasimha Raghavan, Tabatabaei, Mohammad Hossein, Elvatun, Severin, Vallevik, Vibeke Binz, Larønningen, Siri, Nygård, Jan F

arXiv.org Artificial Intelligence

Synthetic data generation is increasingly recognized as a crucial solution to address data related challenges such as scarcity, bias, and privacy concerns. As synthetic data proliferates, the need for a robust evaluation framework to select a synthetic data generator becomes more pressing given the variety of options available. In this research study, we investigate two primary questions: 1) How can we select the most suitable synthetic data generator from a set of options for a specific purpose? 2) How can we make the selection process more transparent, accountable, and auditable? To address these questions, we introduce a novel approach in which the proposed ranking algorithm is implemented as a smart contract within a permissioned blockchain framework called Sawtooth. Through comprehensive experiments and comparisons with state-of-the-art baseline ranking solutions, our framework demonstrates its effectiveness in providing nuanced rankings that consider both desirable and undesirable properties. Furthermore, our framework serves as a valuable tool for selecting the optimal synthetic data generators for specific needs while ensuring compliance with data protection principles.


JCLEC-MO: a Java suite for solving many-objective optimization engineering problems

Ramírez, Aurora, Romero, José Raúl, García-Martínez, Carlos, Ventura, Sebastián

arXiv.org Artificial Intelligence

The use of efficient search methods has grown significantly in recent years, especially for engineering problems where multiple objectives must be optimized simultaneously (Marler and Arora, 2004). A recurrent situation in engineering is the need to jointly optimize energy consumption, cost, or time, among others. All these factors are a paramount concern to the expert and represent conflicting objectives, each with a deep impact on the final solution (Marler and Arora, 2004). Initially applied to single-objective problems, metaheuristics such as evolutionary algorithms (EAs) have been successfully applied to multi-objective problems (MOPs) in engineering, such as the design of efficient transport systems (Domínguez et al., 2014) or safe civil structures (Zavala et al., 2014). The presence of a large number of objectives has recently been identified as an intrinsic characteristic of engineering problems (Singh, 2016), for which currently applied techniques might not be efficient enough. Notably, other communities are also demanding novel techniques to face increasingly complex problems, which has led to the emergence of many-objective optimization (von Lücken et al., 2014; Li et al., 2015). This variant of the more general multi-objective optimization (MOO) is specifically devoted to overcoming the limits of existing algorithms on problems with four or more objectives, known as many-objective problems (MaOPs). Even though each metaheuristic follows different principles to conduct the search, their adaptations to deal with either MOPs or MaOPs share some similarities, such as new diversity preservation mechanisms or the use of indicators (Li et al., 2015; Mishra et al., 2015).
The resulting many-objective algorithms have proven successful in the engineering field too (Li and Hu, 2014; López-Jaimes and Coello Coello, 2014; Cheng et al., 2017), where specialized software tools have begun to appear (Hadka et al., 2015).
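The indicators mentioned above include the hypervolume, i.e., the objective-space volume a front dominates relative to a reference point. A minimal two-objective (minimization) sketch is shown below; this is a generic illustration of the indicator, not JCLEC-MO's API, and it assumes the front is given as (f1, f2) tuples.

```python
def hypervolume_2d(front, ref):
    """Hypervolume (dominated area) of a 2-objective minimization front
    with respect to a reference point `ref` = (r1, r2)."""
    # Keep only points that dominate the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # point adds a new step of the staircase
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Larger hypervolume means the front covers more of the objective space, which is why it is a popular fitness surrogate in indicator-based many-objective algorithms.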


Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning

Giovanelli, Joseph, Tornede, Alexander, Tornede, Tanja, Lindauer, Marius

arXiv.org Artificial Intelligence

Hyperparameter optimization (HPO) is important to leverage the full potential of machine learning (ML). In practice, users are often interested in multi-objective (MO) problems, i.e., optimizing potentially conflicting objectives, like accuracy and energy consumption. To tackle this, the vast majority of MO-ML algorithms return a Pareto front of non-dominated machine learning models to the user. Optimizing the hyperparameters of such algorithms is non-trivial, as evaluating a hyperparameter configuration entails evaluating the quality of the resulting Pareto front. In the literature, there are known indicators that assess the quality of a Pareto front (e.g., hypervolume, R2) by quantifying different properties (e.g., volume, proximity to a reference point). However, choosing the indicator that leads to the desired Pareto front can be a hard task for a user. In this paper, we propose a human-centered interactive HPO approach tailored to multi-objective ML, leveraging preference learning to extract desiderata from users that guide the optimization. Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator. Concretely, we leverage pairwise comparisons of distinct Pareto fronts to learn such an appropriate quality indicator. Then, we optimize the hyperparameters of the underlying MO-ML algorithm towards this learned indicator using a state-of-the-art HPO approach. In an experimental study targeting the environmental impact of ML, we demonstrate that our approach leads to substantially better Pareto fronts compared to optimizing based on a wrong indicator pre-selected by the user, and performs comparably in the case of an advanced user who knows which indicator to pick.
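The Pareto front of non-dominated models referenced above is itself easy to extract once each model is scored on all objectives. The sketch below is a generic minimization-convention dominance filter, offered as background rather than as the paper's method.

```python
def pareto_front(points):
    """Return the non-dominated subset of `points`.

    Each point is a tuple of objective values; all objectives are
    minimized. A point is dominated if another point is no worse on
    every objective and strictly better on at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Quality indicators such as hypervolume then reduce such a front to a single scalar, and the paper's contribution is learning which scalarization matches the user's preferences from pairwise front comparisons.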


Stochastic Deep Koopman Model for Quality Propagation Analysis in Multistage Manufacturing Systems

Chen, Zhiyi, Maske, Harshal, Shui, Huanyi, Upadhyay, Devesh, Hopka, Michael, Cohen, Joseph, Lai, Xingjian, Huan, Xun, Ni, Jun

arXiv.org Artificial Intelligence

The modeling of multistage manufacturing systems (MMSs) has attracted increased attention from both academia and industry. Recent advancements in deep learning methods provide an opportunity to accomplish this task with reduced cost and expertise. This study introduces a stochastic deep Koopman (SDK) framework to model the complex behavior of MMSs. Specifically, we present a novel application of Koopman operators to propagate critical quality information extracted by variational autoencoders. Through this framework, we can effectively capture the general nonlinear evolution of product quality using a transferred linear representation, thus enhancing the interpretability of the data-driven model. To evaluate the performance of the SDK framework, we carried out a comparative study on an open-source dataset. The main findings of this paper are as follows. Our results indicate that SDK surpasses other popular data-driven models in accuracy when predicting stagewise product quality within the MMS. Furthermore, the unique linear propagation property in the stochastic latent space of SDK enables traceability for quality evolution throughout the process, thereby facilitating the design of root cause analysis schemes. Notably, the proposed framework requires minimal knowledge of the underlying physics of production lines. It serves as a virtual metrology tool that can be applied to various MMSs, contributing to the ultimate goal of Zero Defect Manufacturing.
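The core idea of linear propagation in a latent space can be illustrated in a drastically simplified form: fit a linear operator to pairs of consecutive latent states by least squares, then roll it forward. The scalar sketch below is an assumption-laden toy (the paper's SDK uses variational autoencoders and matrix-valued stochastic Koopman operators), meant only to show what "transferred linear representation" means operationally.

```python
def fit_linear_propagator(states):
    """Least-squares fit of a scalar k so that z[t+1] ~ k * z[t],
    mimicking Koopman-style linear propagation of a latent quality
    state from one manufacturing stage to the next."""
    x, y = states[:-1], states[1:]
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def propagate(z0, k, steps):
    """Roll the fitted linear operator forward from initial state z0."""
    traj = [z0]
    for _ in range(steps):
        traj.append(traj[-1] * k)
    return traj
```

Because the dynamics are linear in the latent space, the influence of an upstream stage on downstream quality can be read off the operator directly, which is what enables the traceability and root-cause analysis the abstract describes.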


Automated Assessment of Transthoracic Echocardiogram Image Quality Using Deep Neural Networks

Labs, Robert B., Vrettos, Apostolos, Loo, Jonathan, Zolgharni, Massoud

arXiv.org Artificial Intelligence

Standard views in two-dimensional echocardiography are well established, but the quality of acquired images is highly dependent on operator skill and is assessed subjectively. This study aims to provide an objective assessment pipeline for echocardiogram image quality by defining a new set of domain-specific quality indicators. Image quality assessment can thus be automated to enhance clinical measurements, interpretation, and real-time optimization. We developed deep neural networks for the automated assessment of echocardiographic frames randomly sampled from 11,262 adult patients. The private echocardiography dataset consists of 33,784 frames acquired between 2010 and 2020. Deep learning approaches were used to extract spatiotemporal features, and the image quality indicators were evaluated against the mean absolute error. Our quality indicators encapsulate both anatomical and pathological elements to provide multivariate assessment scores for anatomical visibility, clarity, depth-gain, and foreshortedness, respectively.


How to Evaluate Solutions in Pareto-based Search-Based Software Engineering? A Critical Review and Methodological Guidance

Li, Miqing, Chen, Tao, Yao, Xin

arXiv.org Artificial Intelligence

With modern requirements, there is an increasing tendency to consider multiple objectives/criteria simultaneously in many Software Engineering (SE) scenarios. Such a multi-objective optimization scenario comes with an important issue: how to evaluate the outcome of optimization algorithms, which is typically a set of incomparable solutions (i.e., solutions that are Pareto non-dominated to each other). This issue can be challenging for the SE community, particularly for practitioners of Search-Based SE (SBSE). On one hand, multi-objective optimization may still be relatively new to SE/SBSE researchers, who may not be able to identify the right evaluation methods for their problems. On the other hand, simply following the evaluation methods for general multi-objective optimization problems may not be appropriate for specific SE problems, especially when the problem nature or the decision maker's preferences are explicitly/implicitly available. This has been well echoed in the literature by various inappropriate/inadequate selections and inaccurate/misleading uses of evaluation methods. In this paper, we carry out a critical review of quality evaluation for multi-objective optimization in SBSE. We survey 717 papers published between 2009 and 2019 from 36 venues in 7 repositories and select 97 prominent studies, through which we identify five important but overlooked issues in the area. We then conduct an in-depth analysis of quality evaluation indicators and general situations in SBSE, which, together with the identified issues, enables us to provide methodological guidance on selecting and using evaluation methods in different SBSE scenarios.


Search-Based Software Engineering for Self-Adaptive Systems: One Survey, Five Disappointments and Six Opportunities

Chen, Tao, Li, Miqing, Li, Ke, Deb, Kalyanmoy

arXiv.org Artificial Intelligence

Search-Based Software Engineering (SBSE) is a promising paradigm that exploits computational search to optimize different processes when engineering complex software systems. Self-adaptive systems (SASs) are one category of such complex systems, permitting the optimization of different functional and non-functional objectives/criteria under changing environments (e.g., requirements and workload), which involves problems that are subject to search. In this regard, over the years there has been a considerable amount of work investigating SBSE for SASs. In this paper, we provide the first systematic and comprehensive survey exclusively on SBSE for SASs, covering 3,740 papers in 27 venues from 7 repositories, which eventually leads to several key statistics from the 73 most notable primary studies in this field of research. Our results, surprisingly, reveal five disappointing issues that are of utmost importance but have been overwhelmingly ignored in existing studies. We provide evidence to justify our arguments about these disappointments and highlight six emergent but currently under-explored opportunities for future work on SBSE for SASs. By mitigating the disappointing issues revealed in this work, together with the highlighted opportunities, we hope to spark much more significant growth in this particular research direction.